144 research outputs found

    Bio-inspired speed detection and discrimination

    In the field of computer vision, a crucial task is the detection of motion (also called optical flow extraction). This operation enables analyses such as 3D reconstruction, feature tracking, time-to-collision estimation and novelty detection, among others. Most optical flow extraction techniques work within a finite range of speeds. Usually, the range of detection is extended towards higher speeds by combining multiscale information in a serial architecture. This serial multiscale approach suffers from error propagation related to the number of scales used in the algorithm. On the other hand, biological experiments show that human motion perception seems to follow a parallel multiscale scheme. In this work we present a bio-inspired parallel architecture for motion detection that provides a wide range of operation and avoids the error propagation associated with the serial architecture. To test our algorithm, we compare the relative errors of both the classical and the proposed techniques, showing that the parallel architecture achieves motion detection with results similar to the serial approach.
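    The parallel scheme described above can be illustrated with a toy one-dimensional shift estimator: every scale works independently on a downsampled copy of the signal, and the best-scoring scale wins, instead of scales refining each other serially. This is a hypothetical sketch; the function names and the normalized-correlation scoring are our own, not the paper's algorithm.

```python
import numpy as np

def estimate_shift(a, b, max_shift):
    """Estimate an integer circular shift between two 1-D signals by
    normalized cross-correlation over a small search window."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    best = (-np.inf, 0)                       # (score, shift)
    for s in range(-max_shift, max_shift + 1):
        score = np.dot(np.roll(a, s), b) / denom
        if score > best[0]:
            best = (score, s)
    return best

def parallel_multiscale_shift(a, b, n_scales=3, max_shift=2):
    """Each scale runs independently on a downsampled copy (parallel scheme);
    the final answer is the highest-scoring scale, not a serial refinement."""
    candidates = []
    for k in range(n_scales):
        f = 2 ** k                            # downsampling factor at scale k
        score, est = estimate_shift(a[::f], b[::f], max_shift)
        candidates.append((score, est * f))   # rescale shift to full resolution
    return max(candidates)[1]
```

    Because each scale only searches a small window, a large displacement that is out of range at the finest scale is still caught (at coarse resolution) by a coarser scale, without any error being propagated between scales.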

    Embedded harmonic control for dynamic trajectory planning on

    This paper presents a parallel hardware implementation of a well-known navigation control method on reconfigurable digital circuits. Trajectories are estimated after an iterated computation of the harmonic functions, given the goal and obstacle positions of the navigation problem. The proposed massively distributed implementation locally computes the direction to choose to get to the goal position at any point of the environment. Changes in this environment may be immediately taken into account, for example when obstacles are discovered during an on-line exploration. The implementation results show that the proposed architecture simultaneously improves speed, power consumption, precision, and environment size.
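    The iterated computation of harmonic functions can be sketched in software as a Jacobi relaxation over a grid, with the goal clamped low and obstacles clamped high, followed by a steepest-descent walk that reads off the trajectory. This is a minimal sequential sketch of the general principle, not the paper's massively distributed hardware implementation; all names and parameter values are our own.

```python
import numpy as np

def harmonic_potential(obstacles, goal, iters=2000):
    """Relax the free cells of a grid toward a harmonic function:
    each free cell becomes the average of its 4 neighbours (Jacobi iteration),
    with obstacles clamped to 1 and the goal clamped to 0."""
    u = np.ones_like(obstacles, dtype=float)
    for _ in range(iters):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(obstacles, 1.0, avg)
        u[goal] = 0.0
    return u

def descend(u, start, obstacles, max_steps=100):
    """Greedy steepest-descent walk: at each cell, step to the lowest free
    neighbour. A harmonic potential has no spurious local minima, so the
    walk can only stop at the goal."""
    path, pos = [start], start
    for _ in range(max_steps):
        r, c = pos
        nbrs = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
        nbrs = [p for p in nbrs
                if 0 <= p[0] < u.shape[0] and 0 <= p[1] < u.shape[1]
                and not obstacles[p]]
        nxt = min(nbrs, key=lambda p: u[p])
        if u[nxt] >= u[pos]:
            break                             # reached the goal (the minimum)
        pos = nxt
        path.append(pos)
    return path
```

    The relaxation step is purely local (each cell only reads its four neighbours), which is what makes the massively distributed hardware mapping described in the abstract natural: one small circuit per cell, all updating simultaneously.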

    FPNA, FPNN: from programmable fields to topologically simplified neural networks

    Internal report. This work aims at developing neural architectures that are easy to map onto FPGAs, thanks to a simplified topology and an original data exchange scheme, without any loss of approximation capability. This report proposes a brief overview of FPNA and FPNN definitions, computations and implementations.

    Hardware-friendly neural computation of symmetric boolean functions

    Internal report. The theoretical and practical framework of Field Programmable Neural Arrays (FPNAs) has been defined to reconcile simple hardware topologies with complex neural architectures: FPNAs lead to powerful neural models whose original data exchange scheme makes it possible to use hardware-friendly neural topologies. This report addresses preliminary results in the study of the computational power of FPNAs. The computation of symmetric boolean functions (e.g. the n-dimensional parity problem) is taken as a textbook example. The FPNA concept allows successive topology simplifications of standard neural models for such functions, so that the number of weights is reduced by a factor of up to n with respect to previous work.
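    The n-dimensional parity problem used as the textbook example is a symmetric boolean function: its value depends only on how many inputs are active. A classical two-layer threshold network with n hidden units exploits this, with hidden unit k firing when at least k inputs are on and an output unit summing the hidden activities with alternating signs. This is the standard construction for illustration, not the FPNA-simplified topology itself.

```python
def parity_net(bits):
    """Two-layer threshold network computing n-bit parity.
    Hidden unit k fires iff sum(bits) >= k; the output unit uses
    alternating +1/-1 weights and a threshold of 0.5."""
    n = len(bits)
    # hidden layer: unit k computes the threshold function [sum(bits) >= k]
    hidden = [int(sum(bits) >= k) for k in range(1, n + 1)]
    # output unit: if s inputs are active, the first s hidden units fire,
    # so the alternating sum is 1 when s is odd and 0 when s is even
    pre = sum((-1) ** (k - 1) * h for k, h in enumerate(hidden, start=1))
    return int(pre >= 0.5)
```

    Note how symmetry keeps the construction linear in n: a general boolean function of n inputs can require exponentially many units, but a symmetric one only needs to count.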

    Correctness of the FPNA neural paradigm

    Internal report. Neural networks are usually considered naturally parallel computing models, but the number of operators and the complex connection graphs of standard neural models cannot be handled by digital hardware devices. A new theoretical and practical framework makes it possible to reconcile simple hardware topologies with complex neural architectures: Field Programmable Neural Arrays (FPNAs) lead to powerful neural architectures that are easy to map onto digital hardware, thanks to a simplified topology and an original data exchange scheme. This report describes the basic principles of the FPNA paradigm. Formal definitions are introduced and illustrated, two computation methods for feedforward FPNAs are presented, and the proof of their correctness is sketched.

    Synchronous FPNNs: neural models that fit reconfigurable hardware

    Internal report. Neural networks are considered naturally parallel computing models, but the number of operators and the complex connection graph of standard neural models cannot be handled by digital hardware devices. Neural network hardware implementations therefore have to reconcile simple hardware topologies with complex neural architectures. A theoretical and practical framework allows this combination by applying configurable hardware principles to neural computation: Field Programmable Neural Arrays (FPNAs) lead to powerful neural architectures that are easy to map onto FPGAs, thanks to a simplified topology and an original data exchange scheme, without any significant loss of approximation capability. This report follows the overview of FPNAs in report 99.R.019: it focuses on a family of FPNA-based neural networks that are especially well suited to reconfigurable hardware.

    Enhanced representation learning with temporal coding in sparsely spiking neural networks

    Current representation learning methods in Spiking Neural Networks (SNNs) rely on rate-based encoding, resulting in high spike counts, increased energy consumption, and slower information transmission. In contrast, our proposed method, Weight-Temporally Coded Representation Learning (W-TCRL), utilizes temporally coded inputs, leading to lower spike counts and improved efficiency. To address the challenge of extracting representations from a temporal code with low reconstruction error, we introduce a novel Spike-Timing-Dependent Plasticity (STDP) rule. This rule enables stable learning of relative latencies within the synaptic weight distribution and is locally implemented in space and time, making it compatible with neuromorphic processors. We evaluate the performance of W-TCRL on the MNIST and natural image datasets for image reconstruction tasks. Our results demonstrate relative improvements of 53% for MNIST and 75% for natural images in terms of reconstruction error compared to the SNN state of the art. Additionally, our method achieves significantly higher sparsity, up to 900 times greater than in related work. These findings emphasize the efficacy of W-TCRL in leveraging temporal coding for enhanced representation learning in Spiking Neural Networks.
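    The STDP family of rules referred to here can be illustrated with a generic pair-based update, in which the weight change decays exponentially with the spike-timing difference: a presynaptic spike shortly before a postsynaptic one potentiates the synapse, the reverse order depresses it. This is a textbook STDP sketch with made-up parameter values, not the paper's W-TCRL rule.

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.05, a_minus=0.04,
                tau=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes the
    postsynaptic one (dt >= 0), depress otherwise; the magnitude decays
    exponentially with |dt| (time constant tau, in ms). The weight is
    clipped to [w_min, w_max]. All parameter values are illustrative."""
    dt = t_post - t_pre
    if dt >= 0:
        w += a_plus * math.exp(-dt / tau)     # causal pair: potentiation
    else:
        w -= a_minus * math.exp(dt / tau)     # anti-causal pair: depression
    return min(max(w, w_min), w_max)
```

    The update only needs the two local spike times and the local weight, which is what makes rules of this kind implementable on neuromorphic processors, as the abstract notes.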

    Visual pattern classification by neural fields

    National audience. The recognition of visual motion patterns such as walking, fighting and facial gestures, among others, is remarkably efficient in humans and many other species. Experiments based on point-light stimuli in psychophysics, electro-physiological data and functional imaging techniques have already given some clues about the nature of the internal recognition mechanisms. In this work, we study some of the identified properties and propose to model them by means of asymmetric neural fields.
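    The neural-field formalism invoked here is usually written as an Amari-type integro-differential equation; in a generic notation (symbols chosen for illustration, not necessarily the paper's):

```latex
\tau \,\frac{\partial u(x,t)}{\partial t} \;=\; -\,u(x,t)
  \;+\; \int w(x - x')\, f\bigl(u(x',t)\bigr)\, dx' \;+\; I(x,t)
```

    where u(x, t) is the field activity, f a firing-rate nonlinearity, I(x, t) the visual input, and w the lateral interaction kernel. Making the kernel asymmetric, w(x - x') ≠ w(x' - x), biases the direction in which activity propagates, which is one way such fields can be made selective to the direction of a motion pattern.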